Real time change-point detection in a nonlinear quantile model
Most studies of real-time change-point detection either focus on the linear
model or use the CUSUM method under classical assumptions on the model errors.
This paper considers sequential change-point detection in a nonlinear quantile
model. A test statistic based on the CUSUM of the subgradient of the quantile
process is proposed and studied. Under the null hypothesis that the model does
not change, the asymptotic distribution of the test statistic is determined.
Under the alternative hypothesis that the model changes at some unknown
observation, the proposed test statistic converges in probability to infinity.
These results allow the construction of critical regions for both open-end and
closed-end procedures. Monte Carlo simulations investigate the performance of
the test statistic, especially for heavy-tailed error distributions, and
compare it with the classical CUSUM test statistic.
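The monitoring idea can be sketched in a few lines. This is an illustrative simplification, not the paper's exact statistic: the centring, the variance normalization, and the flat comparison of partial sums are assumptions made here for readability.

```python
import numpy as np

def psi(u, tau):
    # Subgradient of the quantile check function rho_tau(u) = u * (tau - 1{u < 0})
    return tau - (u < 0).astype(float)

def cusum_subgradient_stats(train_resid, new_resid, tau):
    """Normalized partial sums of the quantile-score process for monitoring.

    train_resid: residuals from a historical sample of size m (no change assumed)
    new_resid:   residuals of observations arriving after the training period
    """
    m = len(train_resid)
    center = psi(train_resid, tau).mean()   # close to 0 under a correct fit
    sigma = np.sqrt(tau * (1.0 - tau))      # Var(psi) under the null
    partial_sums = np.cumsum(psi(new_resid, tau) - center)
    return np.abs(partial_sums) / (sigma * np.sqrt(m))

rng = np.random.default_rng(0)
calm = cusum_subgradient_stats(rng.normal(size=200), rng.normal(size=100), tau=0.5)
shifted = cusum_subgradient_stats(rng.normal(size=200), rng.normal(size=100) + 2.0, tau=0.5)
```

Under no change the statistics stay bounded in probability, while after a shift the partial sums drift, so the statistic exceeds any fixed critical value, matching the divergence result stated above.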
Adaptive Fused LASSO in Grouped Quantile Regression
This paper considers a quantile model with grouped explanatory variables. In
order to obtain sparsity both across the parameter groups and between two
successive groups of variables, we propose and study an adaptive fused group
LASSO quantile estimator. The number of variable groups can be fixed or
divergent. We derive the convergence rate under classical assumptions and show
that the proposed estimator satisfies the oracle properties.
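The shape of the estimation criterion can be sketched as follows. This is an illustrative objective only: the adaptive weights `w_sparsity` and `w_fusion` are supplied by the caller (the paper's exact weight construction and optimization algorithm are not reproduced), and the fusion term assumes successive groups have equal dimension.

```python
import numpy as np

def check_loss(u, tau):
    # Quantile check function rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (u < 0))

def fused_group_lasso_objective(beta_groups, X_groups, y, tau,
                                lam1, lam2, w_sparsity, w_fusion):
    """Penalized quantile criterion with group-sparsity and fusion penalties."""
    pred = sum(X @ b for X, b in zip(X_groups, beta_groups))
    loss = check_loss(y - pred, tau).sum()
    # Group sparsity: adaptively weighted Euclidean norm of each group
    sparsity = sum(w * np.linalg.norm(b) for w, b in zip(w_sparsity, beta_groups))
    # Fusion: adaptively weighted norm of differences of successive groups,
    # pushing neighbouring groups toward equal coefficients
    fusion = sum(w * np.linalg.norm(b2 - b1)
                 for w, (b1, b2) in zip(w_fusion, zip(beta_groups, beta_groups[1:])))
    return loss + lam1 * sparsity + lam2 * fusion
```

Setting a whole group to zero removes its sparsity penalty, while making two successive groups equal removes the corresponding fusion penalty; the objective thus favours exactly the two kinds of sparsity described above.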
Quantile regression in high-dimension with breaking
The paper considers a high-dimensional linear regression model in which the
influence of the predictive variables on the response variable can change at
unknown times (called change-points). Moreover, the particular case of
heavy-tailed errors is considered. In this case, the least squares method with
a LASSO or adaptive LASSO penalty cannot be used, since the theoretical
assumptions do not hold or the estimators are not robust. Instead, the quantile
model with a SCAD penalty, or median regression with a LASSO-type penalty,
allows simultaneous estimation of the parameters on every segment and
elimination of the irrelevant variables. We show that, for the two penalized
estimation methods, the oracle properties are not affected by the change-point
estimation. Convergence rates of the estimators for the change-points and for
the regression parameters are found for both methods. Monte Carlo simulations
illustrate the performance of the methods.
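As a toy illustration of the segmented estimation idea, the sketch below locates a single change-point by minimizing the sum of segment-wise median-regression (tau = 0.5) check losses. It uses an intercept-only model and no penalty, so it is a drastic simplification of the paper's penalized high-dimensional procedure.

```python
import numpy as np

def segment_loss(y):
    # Check loss of an intercept-only median regression: the minimizer
    # of sum rho_{0.5}(y_i - b) over b is the sample median
    return 0.5 * np.abs(y - np.median(y)).sum()

def estimate_change_point(y, min_seg=5):
    """Grid search for one change-point minimizing the total segment loss."""
    n = len(y)
    best_k, best_loss = None, np.inf
    for k in range(min_seg, n - min_seg + 1):
        loss = segment_loss(y[:k]) + segment_loss(y[k:])
        if loss < best_loss:
            best_k, best_loss = k, loss
    return best_k

y = np.concatenate([np.zeros(50), 5.0 * np.ones(50)])
k_hat = estimate_change_point(y)   # the true change-point is at observation 50
```

In the paper's setting, each `segment_loss` would be a penalized quantile or median regression over the high-dimensional covariates, which is what makes the method robust to heavy-tailed errors.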
Two tests for sequential detection of a change-point in a nonlinear model
In this paper, two tests, based on the CUSUM of the residuals and on least
squares estimation, are studied to detect a change-point in a nonlinear model
in real time. A first test statistic is proposed by extending a method already
used in the literature for linear models. At each sequential observation, the
null hypothesis that there is no change in the model is tested against the
presence of a change. The asymptotic distribution of the test statistic under
the null hypothesis is given, and its convergence in probability to infinity
is proved when a change occurs. These results allow the construction of an
asymptotic critical region. Next, in order to decrease the type I error
probability, a bootstrapped critical value is proposed and a modified test is
studied in a similar way. Monte Carlo simulation results for nonlinear models
with numerous applications investigate the properties of the two test
statistics.
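The bootstrapped critical value can be illustrated schematically. The monitoring statistic and the flat boundary below are simplifying assumptions made here, not the paper's exact boundary function or residual construction.

```python
import numpy as np

def monitoring_stats(train_resid, stream_resid):
    # CUSUM of incoming residuals, centred and scaled with training-sample estimates
    m = len(train_resid)
    s = np.cumsum(stream_resid - train_resid.mean())
    return np.abs(s) / (train_resid.std(ddof=1) * np.sqrt(m))

def bootstrap_critical_value(train_resid, horizon, level=0.05, B=500, seed=0):
    """Approximate the (1 - level) quantile of the maximal monitoring
    statistic over the horizon by resampling the training residuals."""
    rng = np.random.default_rng(seed)
    maxima = np.empty(B)
    for b in range(B):
        fake = rng.choice(train_resid, size=horizon, replace=True)
        maxima[b] = monitoring_stats(train_resid, fake).max()
    return np.quantile(maxima, 1.0 - level)

def first_alarm(train_resid, stream_resid, critical_value):
    hits = np.where(monitoring_stats(train_resid, stream_resid) > critical_value)[0]
    return int(hits[0]) + 1 if hits.size else None  # 1-based alarm time

rng = np.random.default_rng(1)
train = rng.normal(size=300)
cv = bootstrap_critical_value(train, horizon=100)
alarm = first_alarm(train, rng.normal(size=100) + 3.0, cv)
```

Resampling the training residuals calibrates the threshold to the actual (possibly non-Gaussian) error distribution, which is the motivation given above for replacing the asymptotic critical region.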
Empirical likelihood test for high-dimensional two-sample model
A nonparametric method based on the empirical likelihood is proposed for
detecting a change in the coefficients of a high-dimensional linear model,
where the number of model variables may increase with the sample size. This
amounts to testing the null hypothesis of no change against the alternative of
one change in the regression coefficients. Based on the theoretical asymptotic
behaviour of the empirical likelihood ratio statistic, we propose, for a fixed
design, a simpler test statistic that is easier to use in practice. The
asymptotic normality of the proposed test statistic under the null hypothesis
is proved, a result which differs from the χ² law obtained for a model with a
fixed number of variables. Under the alternative hypothesis, the test statistic
diverges. We can then find the asymptotic confidence region for the difference
of the parameters of the two phases. Monte Carlo simulations study the
behaviour of the proposed test statistic.
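For intuition, the sketch below computes the classical empirical likelihood ratio for a univariate mean, the fixed-dimension setting whose chi-square limit (Owen's theorem) contrasts with the high-dimensional normal limit discussed above. The bisection solver and its tolerances are illustrative choices, not part of the paper's method.

```python
import numpy as np

def el_log_ratio(x, mu0):
    """-2 log empirical likelihood ratio for the mean of a univariate sample.

    Weights w_i = 1 / (n * (1 + lam * (x_i - mu0))), with lam solving
    sum (x_i - mu0) / (1 + lam * (x_i - mu0)) = 0.
    """
    z = x - mu0
    if z.min() >= 0 or z.max() <= 0:
        return np.inf  # mu0 outside the convex hull of the data
    # Valid lam keeps all weights positive: 1 + lam * z_i > 0 for every i
    lo = (-1 + 1e-12) / z.max()
    hi = (-1 + 1e-12) / z.min()
    # g(lam) = sum z_i / (1 + lam z_i) is strictly decreasing: bisection
    def g(lam):
        return np.sum(z / (1 + lam * z))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2 * np.sum(np.log1p(lam * z))
```

At the sample mean the statistic is zero, and for a fixed number of variables it converges to a χ² law; in the high-dimensional regime of the abstract, a different (normal) limit takes its place.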